
    A False Acceptance Error Controlling Method for Hyperspherical Classifiers

    Controlling false acceptance errors is critically important in many pattern recognition applications, including signature and speaker verification. Toward this goal, this paper presents two post-processing methods that improve the ability of hyperspherical classifiers to reject patterns from unknown classes. The first method uses a self-organizing approach to design minimum-radius hyperspheres, reducing the redundancy of the class regions defined by the hyperspherical classifier. The second method removes further redundant class regions by using a clustering technique to replace each hypersphere with a number of smaller hyperspheres. Simulation and experimental results demonstrate that, by removing redundant regions, these two post-processing methods reduce the false acceptance error without significantly increasing the false rejection error.
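    A minimal sketch of the rejection idea, under illustrative assumptions: each class region is a hypersphere around the class mean, the minimum-radius design is approximated by taking the tightest sphere that covers the class samples (the paper's self-organizing procedure is not reproduced), and the second post-processing step is approximated with k-means sub-clusters. The data, sphere counts, and helper names below are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_hyperspheres(X, n_spheres=1):
    """Cover the class samples X with n_spheres (center, radius) pairs."""
    if n_spheres == 1:
        centers = X.mean(axis=0, keepdims=True)
        labels = np.zeros(len(X), dtype=int)
    else:
        km = KMeans(n_clusters=n_spheres, n_init=10, random_state=0).fit(X)
        centers, labels = km.cluster_centers_, km.labels_
    radii = np.array([np.linalg.norm(X[labels == k] - centers[k], axis=1).max()
                      for k in range(len(centers))])
    return centers, radii

def accepts(x, centers, radii):
    """Accept a pattern only if it falls inside at least one class sphere."""
    return bool((np.linalg.norm(centers - x, axis=1) <= radii).any())

# Hypothetical data: one known class and one unknown (impostor) class.
rng = np.random.default_rng(0)
known = rng.normal(0.0, 1.0, size=(200, 2))
unknown = rng.normal(2.5, 1.0, size=(200, 2))
for k in (1, 4):
    centers, radii = fit_hyperspheres(known, n_spheres=k)
    far = sum(accepts(x, centers, radii) for x in unknown) / len(unknown)
    print(f"{k} sphere(s): false acceptance rate on the unknown class = {far:.2f}")
```

    Splitting one large sphere into several smaller ones shrinks the total covered volume, which is why fewer patterns from the unknown class are falsely accepted.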

    A Training Sample Sequence Planning Method for Pattern Recognition Problems

    In solving pattern recognition problems, many classification methods, such as the nearest-neighbor (NN) rule, need to determine prototypes from a training set. To help these classifiers find an efficient set of prototypes, this paper introduces a training sample sequence planning method. In particular, by estimating the relative nearness of the training samples to the decision boundary, the proposed approach incrementally increases the number of prototypes until the desired classification accuracy is reached. The approach has been tested with an NN classification method and a neural network training approach. Studies on both artificial and real data demonstrate that higher classification accuracy can be achieved with fewer prototypes.
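    A minimal sketch of the planning idea, under illustrative assumptions: nearness to the decision boundary is estimated by each sample's distance to its closest neighbor of a different class, and prototypes are added in that order until a 1-NN classifier reaches the target training accuracy. The ordering, data, and helper names are hypothetical; the paper's actual nearness estimate and stopping rule may differ.

```python
import numpy as np

def plan_prototypes(X, y, target_accuracy=0.95):
    """Add prototypes in planned order until a 1-NN classifier hits the target."""
    # Distance from each sample to its closest sample of a different class,
    # used here as a rough estimate of nearness to the decision boundary.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    other = y[None, :] != y[:, None]
    margin = np.where(other, dists, np.inf).min(axis=1)
    order = np.argsort(margin)                 # boundary samples first

    proto_idx = []
    for i in order:
        proto_idx.append(i)
        P, py = X[proto_idx], y[proto_idx]
        # Classify the whole training set with the current prototypes (1-NN).
        nn = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=-1).argmin(axis=1)
        acc = (py[nn] == y).mean()
        if acc >= target_accuracy:
            break
    return np.array(proto_idx), acc

# Hypothetical two-class data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(3.0, 1.0, (100, 2))])
y = np.repeat([0, 1], 100)
protos, acc = plan_prototypes(X, y)
print(f"{len(protos)} prototypes reach {acc:.2%} training accuracy")
```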

    One-Class-at-a-Time Removal Sequence Planning Method for Multiclass Classification Problems

    Using dynamic programming, this work develops a one-class-at-a-time removal sequence planning method that decomposes a multiclass classification problem into a series of two-class problems. Compared with previous decomposition methods, the approach has the following distinct features. First, under the one-class-at-a-time framework, the approach guarantees the optimality of the decomposition. Second, for a K-class problem, the method requires only K-1 binary classifiers. Third, to achieve higher classification accuracy, the approach can easily be adapted to form a committee machine. A drawback of the approach is that its computational burden grows rapidly with the number of classes. To resolve this difficulty, a partial decomposition technique is introduced that reduces the computational cost by generating a suboptimal solution. Experimental results demonstrate that the proposed approach consistently outperforms two conventional decomposition methods.
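    A minimal sketch of the dynamic-programming planner, under illustrative assumptions: the cost of removing one class from the remaining set is approximated by the training error of a nearest-centroid binary classifier, and the recursion searches all removal orders, yielding K-1 binary classifiers for a K-class problem. The cost function, data, and helper names are hypothetical, not the paper's exact formulation.

```python
from functools import lru_cache
import numpy as np

def removal_sequence(X, y):
    """Plan the order in which classes are removed, one class at a time."""
    classes = tuple(sorted(set(y.tolist())))

    def removal_cost(c, remaining):
        # Training error of a (hypothetical) nearest-centroid binary classifier
        # separating class c from the other remaining classes.
        mask = np.isin(y, remaining)
        Xr, pos = X[mask], y[mask] == c
        mu_pos, mu_neg = Xr[pos].mean(axis=0), Xr[~pos].mean(axis=0)
        pred = (np.linalg.norm(Xr - mu_pos, axis=1)
                < np.linalg.norm(Xr - mu_neg, axis=1))
        return float((pred != pos).mean())

    @lru_cache(maxsize=None)
    def best(remaining):
        if len(remaining) <= 1:                # the last class needs no classifier
            return 0.0, ()
        options = []
        for c in remaining:
            rest = tuple(r for r in remaining if r != c)
            cost, seq = best(rest)
            options.append((removal_cost(c, remaining) + cost, (c,) + seq))
        return min(options)                    # optimal sequence uses K-1 classifiers

    return best(classes)

# Hypothetical four-class data.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(m, 1.0, (60, 2)) for m in ((0, 0), (4, 0), (0, 4), (6, 6))])
y = np.repeat([0, 1, 2, 3], 60)
total_cost, order = removal_sequence(X, y)
print("optimal removal order:", order, " summed training error:", round(total_cost, 3))
```

    Because the exact recursion visits every subset of classes, its cost grows exponentially with the number of classes, which is the computational burden the partial decomposition technique is meant to reduce.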

    A vector quantization method for nearest neighbor classifier design

    This paper proposes a nearest neighbor classifier design method based on vector quantization (VQ). By examining the error distribution over the training set, the VQ technique is applied to generate prototypes incrementally until the desired classification result is reached. Experimental results demonstrate the effectiveness of the method.
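    A minimal sketch of the VQ idea, under illustrative assumptions: each class keeps a k-means codebook whose vectors serve as 1-NN prototypes, and the codebooks of classes that still contribute training errors are grown until the target accuracy is met. The growth rule, data, and helper names are hypothetical and may differ from the paper's procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def vq_prototypes(X, y, target_accuracy=0.95, max_rounds=20):
    """Grow per-class VQ codebooks until a 1-NN classifier hits the target."""
    classes = np.unique(y)
    sizes = {c: 1 for c in classes}            # codebook size per class
    for _ in range(max_rounds):
        P, py = [], []
        for c in classes:
            km = KMeans(n_clusters=sizes[c], n_init=5, random_state=0).fit(X[y == c])
            P.append(km.cluster_centers_)
            py.extend([c] * sizes[c])
        P, py = np.vstack(P), np.array(py)
        # 1-NN classification of the training set with the current codebooks.
        nn = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=-1).argmin(axis=1)
        wrong = py[nn] != y
        acc = 1.0 - wrong.mean()
        if acc >= target_accuracy:
            break
        for c in classes:                      # grow codebooks where errors remain
            if (wrong & (y == c)).any():
                sizes[c] += 1
    return P, py, acc

# Hypothetical two-class data with overlapping classes.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal((0, 0), 1.0, (150, 2)), rng.normal((3, 0), 1.0, (150, 2))])
y = np.repeat([0, 1], 150)
P, py, acc = vq_prototypes(X, y)
print(f"{len(P)} prototypes, {acc:.2%} training accuracy")
```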